Results 1 - 4 of 4
1.
PLoS One ; 16(5): e0251493, 2021.
Article in English | MEDLINE | ID: mdl-33974653

ABSTRACT

Classification schemes for scientific activity and publications underpin a large swath of research evaluation practices at the organizational, governmental, and national levels. Several research classifications are currently in use, and they require continuous work as new classification techniques become available and as new research topics emerge. Convolutional neural networks, a subset of "deep learning" approaches, have recently offered novel and highly performant methods for classifying voluminous corpora of text. This article benchmarks a deep learning classification technique on more than 40 million scientific articles and on tens of thousands of scholarly journals. The comparison is performed against bibliographic coupling-, direct citation-, and manual-based classifications, the established and most widely used approaches in the field of bibliometrics, and by extension, in many science and innovation policy activities such as grant competition management. The results reveal that the performance of this first iteration of a deep learning approach is equivalent to the graph-based bibliometric approaches. All methods presented are also on par with manual classification. Somewhat surprisingly, no machine learning approaches were found to clearly outperform the simple label propagation approach that is direct citation. In conclusion, deep learning is promising because it performed just as well as the other approaches but has more flexibility to be further improved. For example, a deep neural network incorporating information from the citation network is likely to hold the key to an even better classification algorithm.


Subject(s)
Bibliometrics, Deep Learning, Publications/classification, Science, Benchmarking, Bibliographies as Topic, Databases, Bibliographic, Scholarly Communication/statistics & numerical data
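The "simple label propagation approach that is direct citation" mentioned in the abstract can be sketched in a few lines: each article takes the majority label among the articles it cites, and labels spread outward from a seed set over repeated passes. The function names and toy citation graph below are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter

def propagate_labels(citations, seed_labels, iterations=2):
    """Assign each unlabelled article the majority label among the
    articles it cites (a simple label-propagation scheme).

    citations:   dict of article id -> list of cited article ids
    seed_labels: dict of article id -> known field label
    """
    labels = dict(seed_labels)
    for _ in range(iterations):
        updated = dict(labels)
        for article, refs in citations.items():
            # Tally the labels of cited articles that are already labelled.
            votes = Counter(labels[r] for r in refs if r in labels)
            if votes:
                updated[article] = votes.most_common(1)[0][0]
        labels = updated
    return labels

# Toy example: a5 reaches the seed labels only through a3 and a4,
# so it needs a second propagation pass to get classified.
citations = {"a3": ["a1", "a1", "a2"], "a4": ["a2"], "a5": ["a3", "a4", "a4"]}
seeds = {"a1": "physics", "a2": "biology"}
print(propagate_labels(citations, seeds))
```

Running this at the scale the paper describes (tens of millions of articles) would of course require a sparse graph representation rather than Python dicts, but the core voting step is the same.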
2.
PLoS One ; 8(1): e53943, 2013.
Article in English | MEDLINE | ID: mdl-23372677

ABSTRACT

Community detection helps us simplify the complex configuration of networks, but communities are reliable only if they are statistically significant. To detect statistically significant communities, a common approach is to resample the original network and analyze the communities. But resampling assumes independence between samples, while the components of a network are inherently dependent. Therefore, we must understand how breaking dependencies between resampled components affects the results of the significance analysis. Here we use scientific communication as a model system to analyze this effect. Our dataset includes citations among articles published in journals in the years 1984-2010. We compare parametric resampling of citations with non-parametric article resampling. While citation resampling breaks link dependencies, article resampling maintains such dependencies. We find that citation resampling underestimates the variance of link weights. Moreover, this underestimation explains most of the differences in the significance analysis of ranking and clustering. Therefore, when only link weights are available and article resampling is not an option, we suggest a simple parametric resampling scheme that generates link-weight variances close to the link-weight variances of article resampling. Nevertheless, when we highlight and summarize important structural changes in science, the more dependencies we can maintain in the resampling scheme, the earlier we can predict structural change.


Subject(s)
Models, Statistical, Publishing/statistics & numerical data, Bibliometrics, Cluster Analysis, Humans
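The contrast the abstract draws between parametric citation resampling and non-parametric article resampling can be illustrated with a small sketch. The data layout, function names, and the Poisson model for link weights are illustrative assumptions, not the authors' code; the point is that redrawing link weights independently breaks the dependence between citations made by the same article:

```python
import math
import random
from collections import Counter

def link_weights(articles):
    """Aggregate citation counts per (source, target) link from
    article-level data; each article is (source, [cited targets])."""
    w = Counter()
    for src, cited in articles:
        for dst in cited:
            w[(src, dst)] += 1
    return w

def article_bootstrap(articles, n=500, seed=0):
    """Non-parametric resampling: draw whole articles with replacement,
    preserving dependencies among citations from the same article."""
    rng = random.Random(seed)
    return [
        link_weights([articles[rng.randrange(len(articles))] for _ in articles])
        for _ in range(n)
    ]

def poisson_draw(rng, lam):
    # Knuth's algorithm for a Poisson variate with mean lam (lam > 0).
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def citation_resample(articles, n=500, seed=0):
    """Parametric resampling: redraw each link weight independently as
    Poisson, breaking dependencies between a single article's citations."""
    rng = random.Random(seed)
    base = link_weights(articles)
    return [Counter({l: poisson_draw(rng, w) for l, w in base.items()})
            for _ in range(n)]

def weight_variance(samples, link):
    vals = [s[link] for s in samples]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# One article carries half the citations on the link, so article
# resampling should show the larger link-weight variance.
articles = [("J1", ["J2"] * 5)] + [("J1", ["J2"])] * 5
print(weight_variance(article_bootstrap(articles), ("J1", "J2")))
print(weight_variance(citation_resample(articles), ("J1", "J2")))
```

In this toy setup the article bootstrap yields a larger link-weight variance than the independent Poisson redraw, consistent with the abstract's finding that citation resampling underestimates link-weight variance.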
4.
Healthc Pap ; 5(2): 133-40, 2004.
Article in English | MEDLINE | ID: mdl-15829775

ABSTRACT

This paper uses the Medline biomedical papers database to measure scientific production on mental health in the workplace (MHWP) during the 1991-2002 period at the world, Canadian, provincial, urban, institutional and researcher levels. The level of scientific output has doubled at the world level and tripled at the Canadian level during the last 12 years. At the provincial level, Ontario, Quebec, British Columbia and Alberta are leading in absolute number of papers. Ontario largely dominates both in terms of output and on a per capita basis. At the level of cities, Toronto and Montreal are the largest producers of papers on MHWP. The most important institutions in terms of papers on MHWP are McMaster University, Université de Montréal, the University of Toronto, the University of British Columbia and the University of Western Ontario. The universities with the largest number of active researchers in MHWP are McMaster University, Université Laval and York University.


Subject(s)
Mental Health, Workplace, Bibliometrics, British Columbia, Canada, Humans, Ontario, Research, Research Personnel, Universities